
    Robot imagination system

    International Doctorate Mention. This thesis presents the Robot Imagination System (RIS). This system provides a convenient mechanism for a robot to learn a user's descriptive vocabulary, and how it relates to the world for action. With RIS, a user can describe unfamiliar objects to a robot, and the robot will understand the description as long as it is a combination of words that have previously been used to describe other objects. One of the core uses of the RIS functionality is object recognition. Allowing requests with word combinations that have never been presented together before is well beyond the scope of many of the most relevant state-of-the-art object recognition systems. RIS is not limited to object recognition. Through the use of evolutionary algorithms, the system endows the robot with the capability of generating a mental model (imagination) of a requested unfamiliar object. This capability allows the robot to work with this newly generated model within its simulations, or to expose the model to a user by projecting it on a screen or drawing it as feedback, so the user can provide a more detailed description if required. A new paradigm for robot action based on consequences on the environment has been integrated within the RIS architecture. Changes in the environment are continuously tracked, and actions are considered complete when the performed effects are closest to the desired effects, in a closed perception loop. Experimental validations have been performed in real environments using the humanoid robot Teo, bringing the Robot Imagination System closer to everyday household environments in the near future.
    Official Doctoral Programme in Electrical, Electronic and Automation Engineering. Committee: Chair: Thrishantha Nanayakkara; Secretary: Luis Santiago Garrido Bullón; Member: Vicente Matellán Oliver
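
    The imagination capability described above rests on an evolutionary search over object model parameters. Below is a minimal sketch of that idea, not the thesis' actual implementation: the feature mapping (features_of), the model parameterization, and the target feature vector (assumed to have been derived from the user's learned vocabulary) are all hypothetical stand-ins.

```python
import random

def features_of(model_params):
    # Hypothetical stand-in: in RIS the features of a candidate mental model
    # would be extracted from a rendered/simulated object; here it is identity.
    return model_params

def fitness(model_params, target_features):
    # Lower is better: distance between candidate and target features.
    return sum((f - t) ** 2 for f, t in zip(features_of(model_params),
                                            target_features)) ** 0.5

def imagine(target_features, dim=3, pop_size=20, generations=200, sigma=0.1):
    # Simple (mu + lambda)-style loop: keep the best half, refill with
    # Gaussian-mutated copies of randomly chosen survivors.
    population = [[random.uniform(0.0, 1.0) for _ in range(dim)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda p: fitness(p, target_features))
        parents = population[: pop_size // 2]
        children = [[g + random.gauss(0.0, sigma) for g in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=lambda p: fitness(p, target_features))

# "Imagine" an object whose (hypothetical) feature targets were inferred
# from previously learned descriptive words.
print(imagine(target_features=[0.9, 0.2, 0.5]))
```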

    New trends and challenges in the automatic generation of new tasks for humanoid robots

    Proceedings of: RoboCity16: Robots for Citizens, Open Conference on Future Trends in Robotics, Madrid, May 26-27, 2016. In this paper, the study and implementation of a task generation system that uses information obtained from the user together with already known cases are presented. One of the main objectives of the system is to introduce a new approach in robotics that takes into account the physical limitation of teaching and learning time, and thus the amount of knowledge that a robot can obtain about a given environment (tasks, objects, user preferences...), as a critical bottleneck of any robotic system. To this end, a study of the Case-Based Reasoning (CBR) problem is presented. Additionally, Base Trajectory Combination (BATC), a novel trajectory combination method based on a simplified CBR structure that uses trajectories instead of high-level tasks, is proposed and explained. Finally, this system is tested with MoveIt! as the simulation environment, using the humanoid robot TEO from Universidad Carlos III de Madrid as the robotic platform. The results of these experiments are also presented, with the corresponding conclusions and future research lines. This research has received funding from the RoboCity2030-III-CM project (Robótica aplicada a la mejora de la calidad de vida de los ciudadanos. Fase III; S2013/MIT-2748), funded by Programas de Actividades I+D en la Comunidad de Madrid and cofunded by Structural Funds of the EU
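
    As a rough illustration of the simplified CBR structure behind BATC, the sketch below retrieves the stored trajectories whose goal descriptors are nearest to a new request and blends them with inverse-distance weights. The case representation, retrieval metric and combination rule are assumptions for illustration; the paper's actual method may differ.

```python
import numpy as np

def combine_base_trajectories(case_goals, case_trajs, new_goal, k=2):
    """case_goals: (N, d) goal descriptors; case_trajs: (N, T, joints)."""
    dists = np.linalg.norm(case_goals - new_goal, axis=1)
    nearest = np.argsort(dists)[:k]
    # Inverse-distance weights; epsilon avoids division by zero on exact matches.
    w = 1.0 / (dists[nearest] + 1e-9)
    w /= w.sum()
    # Weighted average of the retrieved joint trajectories, waypoint by waypoint.
    return np.tensordot(w, case_trajs[nearest], axes=1)

goals = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
trajs = np.random.rand(3, 50, 6)          # 3 cases, 50 waypoints, 6 joints
new = combine_base_trajectories(goals, trajs, np.array([0.4, 0.1]))
print(new.shape)                          # (50, 6)
```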

    Sign Language Representation by TEO Humanoid Robot: End-User Interest, Comprehension and Satisfaction

    In this paper, we illustrate our work on improving the accessibility of Cyber-Physical Systems (CPS), presenting a study on human-robot interaction where the end-users are deaf or hearing-impaired people. Current trends in robotic design include devices with robotic arms and hands capable of performing manipulation and grasping tasks. This paper focuses on how these devices can be used for a different purpose: enabling robotic communication via sign language. For the study, several tests and questionnaires were run to check and measure how end-users feel about interpreting sign language represented by a humanoid robotic assistant as opposed to subtitles on a screen. Stemming from this dichotomy, dactylology, basic vocabulary representation and end-user satisfaction are the main topics covered by the delivered form, in which additional comments are valued and taken into consideration for further decision making regarding human-robot interaction. The experiments were performed using TEO, a household companion humanoid robot developed at the University Carlos III de Madrid (UC3M), via representations in Spanish Sign Language (LSE), with a total of 16 deaf and hearing-impaired participants. The research leading to these results has received funding from the RoboCity2030-III-CM project (Robótica aplicada a la mejora de la calidad de vida de los ciudadanos. Fase III; S2013/MIT-2748), funded by Programas de Actividades I+D en la Comunidad de Madrid and cofunded by Structural Funds of the EU

    A study on machine vision techniques for the inspection of health personnel's protective suits for the treatment of patients in extreme isolation

    The examination of Personal Protective Equipment (PPE) to assure the complete integrity of health personnel in contact with infected patients is one of the most necessary tasks when treating patients affected by infectious diseases, such as Ebola. This work focuses on the study of machine vision techniques for the detection of possible defects on the PPE that could arise after contact with the aforementioned pathological patients. A preliminary study on the use of image classification algorithms to identify blood stains on PPE subsequent to the treatment of the infected patient is presented. To produce training data for these algorithms, a synthetic dataset was generated from a simulated model of a PPE suit with blood stains. The study then proceeded with images of the PPE featuring a physical emulation of blood stains, taken by a real prototype. The dataset exhibits a great imbalance between positive and negative samples; therefore, all the selected classification algorithms are able to handle this kind of data. Classifiers range from Logistic Regression and Support Vector Machines to bagging and boosting techniques such as Random Forest, Adaptive Boosting, Gradient Boosting and eXtreme Gradient Boosting. All these algorithms were evaluated on accuracy, precision, recall and F1 score; execution times were also considered. The results are promising for all the classifiers; in particular, Logistic Regression proved to be the most suitable classification algorithm in terms of F1 score and execution time, considering both datasets. The research leading to these results received funding from: Inspección robotizada de los trajes de protección del personal sanitario de pacientes en aislamiento de alto nivel, incluido el ébola, Programa Explora Ciencia, Ministerio de Ciencia, Innovación y Universidades (DPI2015-72015-EXP); the RoboCity2030-DIH-CM Madrid Robotics Digital Innovation Hub ("Robótica aplicada a la mejora de la calidad de vida de los ciudadanos. Fase IV"; S2018/NMT-4331), funded by "Programas de Actividades I+D en la Comunidad de Madrid" and cofunded by Structural Funds of the EU; and ROBOESPAS: Active rehabilitation of patients with upper limb spasticity using collaborative robots, Ministerio de Economía, Industria y Competitividad, Programa Estatal de I+D+i Orientada a los Retos de la Sociedad (DPI2017-87562-C2-1-R)
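
    A minimal sketch of this kind of comparison is shown below, using scikit-learn classifiers on a synthetic imbalanced binary dataset that stands in for the paper's PPE image data (not reproduced here). Class weighting is one common way to cope with the imbalance the abstract mentions.

```python
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import f1_score

# Heavily imbalanced synthetic data: roughly 5% positive (stain) samples.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic_regression": LogisticRegression(class_weight="balanced", max_iter=1000),
    "random_forest": RandomForestClassifier(class_weight="balanced", random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)
    elapsed = time.perf_counter() - t0
    print(f"{name}: F1={f1_score(y_te, model.predict(X_te)):.3f}, fit={elapsed:.2f}s")
```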

    Enabling garment-agnostic laundry tasks for a Robot Household Companion

    Domestic chores, such as laundry tasks, are dull and repetitive. These tasks consume a significant amount of time each day, yet they are unavoidable. Additionally, a great portion of elderly and disabled people require help to perform them due to lack of mobility. In this work we present advances towards a Robot Household Companion (RHC), focusing on the performance of two particular laundry tasks: unfolding and ironing garments. Unfolding is required to recognize the garment prior to any later folding operation. For unfolding, we apply an interactive algorithm based on the analysis of a colored 3D reconstruction of the garment. Regions are clustered based on height, and a bumpiness value is computed to determine the most suitable pick and place points to unfold the overlapping region. For ironing, a custom Wrinkleness Local Descriptor (WiLD) is applied to a 3D reconstruction to find the most significant wrinkles in the garment. These wrinkles are then ironed using an iterative path-following control algorithm that regulates the amount of pressure exerted on the garment. Both algorithms focus on the feasibility of a physical implementation in real, unmodified environments. A set of experiments to validate the algorithms has been performed using a full-sized humanoid robot. This work was supported by the RoboCity2030-III-CM project (S2013/MIT-2748), funded by Programas de Actividades I+D en la Comunidad de Madrid, Spain and the EU, and by an FPU grant funded by Ministerio de Educación, Cultura y Deporte, Spain. It was also supported by the anonymous donor of a red hoodie used in our initial trials. We gratefully acknowledge the support of NVIDIA Corporation, United States, with the donation of the NVIDIA Titan X GPU used for this research
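
    The height-clustering and bumpiness computation for unfolding can be illustrated with a short sketch. The bumpiness definition below (mean local gradient magnitude per height-clustered region) is an illustrative assumption; the paper's exact descriptor and pick-and-place point selection are not reproduced.

```python
import numpy as np

def cluster_by_height(height_map, n_levels=4):
    # Quantize heights into n_levels bins; returns an integer label per pixel.
    edges = np.linspace(height_map.min(), height_map.max(), n_levels + 1)[1:-1]
    return np.digitize(height_map, edges)

def region_bumpiness(height_map, mask):
    # Mean magnitude of local height gradients inside the region:
    # flat regions score near zero, wrinkled or overlapping ones score higher.
    gy, gx = np.gradient(height_map.astype(float))
    return float(np.hypot(gx, gy)[mask].mean())

heights = np.random.rand(64, 64)       # stand-in for a garment 3D reconstruction
labels = cluster_by_height(heights)
for level in np.unique(labels):
    print(level, round(region_bumpiness(heights, labels == level), 4))
```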

    Towards clothes hanging via cloth simulation and deep convolutional networks

    Proceedings of: 10th EUROSIM Congress on Modelling and Simulation (EUROSIM 2019), Logroño, La Rioja, Spain, July 1-5, 2019. People spend several hours a week doing laundry, with hanging clothes being one of the laundry tasks to be performed. Nevertheless, deformable object manipulation still proves to be a challenge for most robotic systems, due to the extremely large number of internal degrees of freedom of a piece of clothing and its chaotic nature. This work presents a step towards automated robot clothes hanging by modeling the dynamics of the hanging task via deep convolutional models. Two models are developed to address two different problems: determining whether the garment will hang or not (classification), and estimating the future garment location in space (regression). Both models have been trained on a synthetic dataset of 15k examples generated through a dynamic simulation of a deformable object. Experiments show that the deep convolutional models presented perform better than a human expert, and that future predictions are largely influenced by time, with uncertainty directly affecting the accuracy of the predictions. This work was supported by the RoboCity2030-III-CM project (S2013/MIT-2748), funded by Programas de Actividades I+D en la Comunidad de Madrid and the EU, and by an FPU grant funded by Ministerio de Educación, Cultura y Deporte. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the NVIDIA Titan X GPU used for this research
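
    A minimal sketch of the classification model is given below, using PyTorch as an assumed framework (the paper does not commit to one here). The layer sizes, input resolution and two-logit output are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class HangClassifier(nn.Module):
    """Small CNN mapping a rendered garment state to hang / no-hang logits."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 2))

    def forward(self, x):              # x: (batch, 1, 64, 64) depth/height images
        return self.head(self.features(x))

model = HangClassifier()
logits = model(torch.randn(8, 1, 64, 64))
print(logits.shape)                    # (8, 2): hang vs. no-hang scores
```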

    Improving CGDA execution through genetic algorithms incorporating spatial and velocity constraints

    Proceedings of: 2017 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), 26-28 April 2017, Coimbra, Portugal. In the Continuous Goal Directed Actions (CGDA) framework, actions are modelled as time series which contain the variations of object and environment features. As robot joint trajectories are not explicitly encoded in CGDA, Evolutionary Algorithms (EA) are used for the execution of these actions. These computations usually require a large number of evaluations. As a consequence, these evaluations are performed in a simulated environment, and the computed trajectory is then transferred to the physical robot. In this paper, constraints are introduced into the CGDA framework as a way to reduce the number of evaluations needed by the system to converge to the optimal robot joint trajectory. Specifically, spatial and velocity constraints are introduced in the framework. Their effects in two commonly studied CGDA use cases (the “wax” and “paint” actions) are analyzed and compared. The experimental results obtained using these constraints are compared with those obtained with the Steady State Tournament (SST) algorithm used in the original proposal of CGDA. Conclusions extracted from this study show a large reduction in the required number of evaluations when incorporating spatial constraints. Velocity constraints, however, provide less promising results, which are discussed within the context of previous CGDA works. The research leading to these results has received funding from the RoboCity2030-III-CM project (Robótica aplicada a la mejora de la calidad de vida de los ciudadanos. Fase III; S2013/MIT-2748), funded by Programas de Actividades I+D en la Comunidad de Madrid and cofunded by Structural Funds of the EU, and by an FPU grant funded by Ministerio de Educación, Cultura y Deporte
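
    How spatial and velocity constraints can cut down evaluations is illustrated below: candidate joint trajectories that violate a workspace box or a per-step velocity bound are rejected with a fixed penalty before any expensive simulation-based scoring. The constraint forms, penalty value and feature-discrepancy score are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def constrained_fitness(joint_traj, goal_features, simulate_features,
                        lower, upper, v_max, penalty=1e3):
    """joint_traj: (T, joints). simulate_features maps it to a feature series."""
    # Spatial constraint: every joint position must stay inside [lower, upper];
    # violating candidates are penalized without running the simulation.
    if np.any(joint_traj < lower) or np.any(joint_traj > upper):
        return penalty
    # Velocity constraint: bound the per-step joint displacement.
    if np.any(np.abs(np.diff(joint_traj, axis=0)) > v_max):
        return penalty
    # Otherwise, score by discrepancy between achieved and goal feature series.
    return float(np.linalg.norm(simulate_features(joint_traj) - goal_features))

# Toy usage: a random 6-joint trajectory scored against a zero feature series,
# with a placeholder "simulation" that just reads off the first three joints.
traj = np.cumsum(np.random.uniform(-0.05, 0.05, size=(20, 6)), axis=0)
score = constrained_fitness(traj, np.zeros((20, 3)), lambda t: t[:, :3],
                            lower=-1.0, upper=1.0, v_max=0.1)
print(score)
```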

    Reducing the number of evaluations required for CGDA execution through Particle Swarm Optimization methods

    Proceedings of: 2017 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), 26-28 April 2017, Coimbra, Portugal. Continuous Goal Directed Actions (CGDA) is a robot learning framework that encodes actions as time series of object and environment scalar features. As the execution of actions is not encoded explicitly, robot joint trajectories are computed through Evolutionary Algorithms (EA), which require a large number of evaluations. The consequence is that evaluations are performed in a simulated environment, and the computed optimal robot trajectory is then transferred to the actual robot. This paper focuses on reducing the number of evaluations required for computing an optimal robot joint trajectory. Particle Swarm Optimization (PSO) methods have been adapted to the CGDA framework to be studied and compared: naïve PSO, Adaptive Fuzzy Fitness Granulation PSO (AFFG-PSO), and Fitness Inheritance PSO (FI-PSO). Experiments have been performed for two representative use cases within CGDA: the “wax” and the “painting” actions. The experimental results of the PSO methods are compared with those obtained with the Steady State Tournament used in the original proposal of CGDA. Conclusions extracted from these results show a reduction in the number of required evaluations, with a simultaneous tradeoff regarding the degree of fulfillment of the objective given by the optimization cost function. The research leading to these results has received funding from the RoboCity2030-III-CM project (Robótica aplicada a la mejora de la calidad de vida de los ciudadanos. Fase III; S2013/MIT-2748), funded by Programas de Actividades I+D en la Comunidad de Madrid and cofunded by Structural Funds of the EU, and by an FPU grant funded by Ministerio de Educación, Cultura y Deporte
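
    A minimal sketch of the evaluation-saving idea behind FI-PSO follows: with some probability a particle reuses an inherited fitness estimate instead of calling the expensive evaluation function (in CGDA, a full simulation). The update rule is standard PSO; the inheritance scheme shown (averaging personal and global best fitness) is a simplified stand-in for the published method.

```python
import random

def fi_pso(evaluate, dim=4, swarm=15, iters=100,
           w=0.7, c1=1.5, c2=1.5, p_inherit=0.5):
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest, pbest_f = [p[:] for p in pos], [evaluate(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    evals = swarm
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d] + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if random.random() < p_inherit:
                # Inherited estimate replaces a real (costly) evaluation.
                f = 0.5 * (pbest_f[i] + gbest_f)
            else:
                f = evaluate(pos[i])
                evals += 1
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f, evals

sphere = lambda x: sum(v * v for v in x)   # stand-in for a robot simulation
best, best_f, n_evals = fi_pso(sphere)
print(round(best_f, 4), n_evals)           # far fewer evals than swarm * iters
```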

    Real Evaluations Tractability using Continuous Goal-Directed Actions in Smart City Applications

    One of the most important challenges of Smart City applications is to adapt the system to interact with non-expert users. Robot imitation frameworks aim to simplify robot programming and reduce programming time by allowing users to program directly through action demonstrations. In classical robot imitation frameworks, actions are modelled using joint or Cartesian space trajectories. These accurately describe actions where geometrical characteristics are relevant, such as fixed trajectories from one pose to another. Other features, such as visual ones, are not always well represented by these purely geometrical approaches. Continuous Goal-Directed Actions (CGDA) is an alternative to these conventional methods, as it encodes actions as changes of any selected feature that can be extracted from the environment. As a consequence, the robot joint trajectories for execution must be fully computed to comply with this feature-agnostic encoding. This is achieved using Evolutionary Algorithms (EA), which usually require too many evaluations to perform this evolution step on the actual robot. The current strategies involve performing evaluations in a simulated environment, transferring only the final joint trajectory to the actual robot. Smart City applications involve working in highly dynamic and complex environments, where having a precise model is not always achievable. Our goal is to study the tractability of performing these evaluations directly in a real-world scenario. Two different approaches to reduce the number of evaluations using EA are proposed and compared. In the first approach, Particle Swarm Optimization (PSO)-based methods have been studied and compared within the CGDA framework: naïve PSO, Fitness Inheritance PSO (FI-PSO), and Adaptive Fuzzy Fitness Granulation with PSO (AFFG-PSO). The research leading to these results has received funding from the RoboCity2030-III-CM project (Robótica aplicada a la mejora de la calidad de vida de los ciudadanos. Fase III; S2013/MIT-2748), funded by Programas de Actividades I+D en la Comunidad de Madrid and cofunded by Structural Funds of the EU
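
    The feature-agnostic encoding at the heart of CGDA can be sketched as follows: an action is a time series of environment features, and a candidate execution is scored by its discrepancy against the goal series. The linear resampling used here for time alignment is an assumption; the CGDA papers define their own alignment and scoring.

```python
import numpy as np

def resample(series, n):
    # Linearly interpolate a (T, d) feature series to n time steps.
    t_old = np.linspace(0.0, 1.0, len(series))
    t_new = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(t_new, t_old, series[:, d])
                            for d in range(series.shape[1])])

def cgda_discrepancy(observed, goal, n=50):
    obs = resample(np.asarray(observed, float), n)
    ref = resample(np.asarray(goal, float), n)
    # Mean per-step distance between achieved and desired feature values.
    return float(np.mean(np.linalg.norm(obs - ref, axis=1)))

# Example: a "paint"-like action tracked as one feature (painted-area ratio).
goal = np.linspace(0, 1, 30)[:, None]              # desired steady progress
observed = np.sqrt(np.linspace(0, 1, 42))[:, None] # faster early progress
print(round(cgda_discrepancy(observed, goal), 3))
```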

    Design of an Active Vision System for High-Level Isolation Units through Q-Learning

    This article belongs to the Special Issue Applied Intelligent Control and Perception in Robotics and Automation. The inspection of Personal Protective Equipment (PPE) is one of the most necessary measures when treating patients affected by infectious diseases, such as Ebola or COVID-19. Assuring the integrity of health personnel in contact with infected patients has become an important concern in developed countries. This work focuses on the study of Reinforcement Learning (RL) techniques for controlling a scanner prototype in the presence of blood traces on the PPE that could arise after contact with pathological patients. A preliminary study on the design of an agent-environment system able to simulate the required task is presented. The task has been adapted to an environment for the OpenAI Gym toolkit. The evaluation of the agent’s performance considered the effects of different topological designs and of tuning the hyperparameters of the model-free Q-Learning algorithm. Results have been evaluated on the basis of average reward and timesteps per episode. The sample-average method applied to the learning rate parameter, together with a specific epsilon decay scheme, worked best for the trained agents. The obtained results report promising outcomes for an inspection system able to center and magnify contaminants in the real scanner system. The research leading to these results received funding from: Inspección robotizada de los trajes de protección del personal sanitario de pacientes en aislamiento de alto nivel, incluido el ébola, Programa Explora Ciencia, Ministerio de Ciencia, Innovación y Universidades (DPI2015-72015-EXP); the RoboCity2030-DIH-CM Madrid Robotics Digital Innovation Hub ("Robótica aplicada a la mejora de la calidad de vida de los ciudadanos. Fase IV"; S2018/NMT-4331), funded by "Programas de Actividades I+D en la Comunidad de Madrid" and cofunded by Structural Funds of the EU; and ROBOESPAS: Active rehabilitation of patients with upper limb spasticity using collaborative robots, Ministerio de Economía, Industria y Competitividad, Programa Estatal de I+D+i Orientada a los Retos de la Sociedad (DPI2017-87562-C2-1-R)
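
    A minimal sketch of the tabular Q-Learning setup with the two choices the abstract highlights (a sample-average step size 1/N(s,a) and a decaying epsilon) is shown below. The toy chain environment is a hypothetical stand-in for the scanner-centering task adapted to OpenAI Gym.

```python
import random
from collections import defaultdict

class ChainEnv:
    """Agent moves left/right on a line; reward 1 at the rightmost cell."""
    def __init__(self, n=8):
        self.n = n
    def reset(self):
        self.s = 0
        return self.s
    def step(self, action):                 # action: 0 = left, 1 = right
        self.s = max(0, min(self.n - 1, self.s + (1 if action == 1 else -1)))
        done = self.s == self.n - 1
        return self.s, (1.0 if done else 0.0), done

env = ChainEnv()
Q = defaultdict(float)
counts = defaultdict(int)
gamma, epsilon = 0.99, 1.0
for episode in range(300):
    s, done, t = env.reset(), False, 0
    while not done and t < 100:
        if random.random() < epsilon:
            a = random.randrange(2)                    # explore
        else:
            a = max((0, 1), key=lambda x: Q[(s, x)])   # exploit
        s2, r, done = env.step(a)
        counts[(s, a)] += 1
        alpha = 1.0 / counts[(s, a)]                   # sample-average step size
        target = r + (0.0 if done else gamma * max(Q[(s2, 0)], Q[(s2, 1)]))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s, t = s2, t + 1
    epsilon = max(0.05, epsilon * 0.99)                # decaying exploration
print(round(Q[(0, 1)], 3))
```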